
Conversation

@bart0401
Contributor

Description:
Clears `model_settings` when `ModelFallbackMiddleware` switches to a fallback model. Previously, provider-specific settings like Anthropic's `cache_control` would persist and cause errors when passed to other providers (e.g., OpenAI throwing `TypeError: unexpected keyword argument 'cache_control'`).

Issue: Fixes #33709

Dependencies: None


Testing:

  • Added sync/async tests verifying `model_settings` are cleared on fallback
  • All existing tests pass

Reset model_settings when switching to a fallback model to prevent
provider-specific parameters from causing errors.

Fixes langchain-ai#33709
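
For context, a minimal sketch of the failure mode, using a stand-in function rather than the real OpenAI client (all names below are illustrative, not LangChain or OpenAI APIs):

```python
# Stand-in for an OpenAI call that accepts only OpenAI-supported kwargs.
def call_openai(prompt: str, *, temperature: float = 1.0) -> str:
    return "ok"

# Settings populated earlier in the pipeline for an Anthropic model:
settings = {"cache_control": {"type": "ephemeral"}}

# Forwarding them unchanged on fallback reproduces the reported error:
call_openai("hello", **settings)
# TypeError: call_openai() got an unexpected keyword argument 'cache_control'
```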
@github-actions github-actions bot added the `langchain` (Related to the package `langchain`), `v1` (Issue specific to LangChain 1.0), and `fix` labels Oct 29, 2025
# Try fallback models
for fallback_model in self.models:
    request.model = fallback_model
    request.model_settings = {}
Collaborator

Could you do this using override? `request = request.override(model=fallback_model)`
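
A sketch of how the fallback loop could look with that suggestion, assuming `override` returns a copy of the request with the given fields replaced rather than mutating it in place (`handler` and the bare `except` are placeholders for the middleware's real invocation and error handling):

```python
# Try fallback models without mutating the incoming request.
for fallback_model in self.models:
    fallback_request = request.override(model=fallback_model)
    try:
        return handler(fallback_request)
    except Exception:
        continue  # try the next fallback model
```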

@eyurtsev
Collaborator

Taking over the implementation

@eyurtsev eyurtsev self-assigned this Oct 29, 2025
@bart0401
Contributor Author

Thank you so much for the helpful review and for taking over the implementation!
I really appreciate you taking the time to guide me toward a better solution.
Looking forward to seeing this merged. Thanks again! 🙏

@eyurtsev
Collaborator

@bart0401 I pushed this to avoid the in-place mutation of the model parameter, not the `model_settings`.

We'd need a different solution for model settings, since we can't assume it's safe to clear them.
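
That caution is worth spelling out: `model_settings` can mix provider-agnostic and provider-specific keys, so wiping the whole dict would also drop settings that remain valid on the fallback. The keys below are examples, not an exhaustive list:

```python
model_settings = {
    "temperature": 0.2,                      # meaningful for any provider
    "cache_control": {"type": "ephemeral"},  # Anthropic-specific
}
# Clearing the dict to remove the second key would also lose the first.
```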

@eyurtsev eyurtsev changed the title fix(langchain): clear model_settings on fallback fix(langchain): use override in model fallbacks Oct 29, 2025
@github-actions github-actions bot added fix and removed fix labels Oct 29, 2025
bart0401 added a commit to bart0401/bart0401 that referenced this pull request Oct 29, 2025
@ivancastanop

But what if your fallback model is also an Anthropic model? You shouldn't be removing the `cache_control` from it.
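
One hypothetical direction (not what the merged change does) would be to drop provider-specific keys only when the fallback actually crosses providers. Every name below — `PROVIDER_SPECIFIC_KEYS`, `provider_of`, `filtered_settings` — is illustrative, not a LangChain API:

```python
# Hypothetical sketch: strip provider-specific settings only on a
# cross-provider fallback, so an Anthropic -> Anthropic fallback keeps
# cache_control intact.
PROVIDER_SPECIFIC_KEYS = {
    "langchain_anthropic": {"cache_control"},  # illustrative mapping
}

def provider_of(model) -> str:
    # Infer the provider from the model's module, e.g.
    # langchain_anthropic.ChatAnthropic -> "langchain_anthropic".
    return type(model).__module__.split(".")[0]

def filtered_settings(current_model, fallback_model, settings: dict) -> dict:
    current = provider_of(current_model)
    if current == provider_of(fallback_model):
        return settings  # same provider: every setting is still valid
    dropped = PROVIDER_SPECIFIC_KEYS.get(current, set())
    return {k: v for k, v in settings.items() if k not in dropped}
```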


Labels

fix · langchain (Related to the package `langchain`) · v1 (Issue specific to LangChain 1.0)

Development

Successfully merging this pull request may close these issues.

AnthropicPromptCachingMiddleware breaks model fallback mechanism with incompatible cache_control parameter
